The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
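Patch-based training, the most common workaround for oversized samples reported above, can be sketched as follows. This is a minimal illustration; the function name and parameters are assumptions, not taken from any surveyed solution:

```python
import numpy as np

def extract_patches(volume, patch_size, stride):
    """Slide a window over a 3D volume and collect fixed-size patches.

    Illustrative sketch of patch-based training: instead of feeding the
    whole (too large) sample to the network, training operates on patches.
    """
    patches = []
    d, h, w = volume.shape
    pd, ph, pw = patch_size
    for z in range(0, d - pd + 1, stride):
        for y in range(0, h - ph + 1, stride):
            for x in range(0, w - pw + 1, stride):
                patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

# A 64^3 volume with 32^3 patches and stride 32 yields 2*2*2 = 8 patches.
vol = np.random.rand(64, 64, 64)
patches = extract_patches(vol, (32, 32, 32), 32)
print(patches.shape)  # (8, 32, 32, 32)
```

At inference, the per-patch predictions are typically stitched back (often with overlapping windows and averaging) to recover a prediction for the full sample.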
Timely and effective feedback within surgical training plays a critical role in developing the skills required to perform safe and efficient surgery. Feedback from expert surgeons, while especially valuable in this regard, is challenging to acquire due to their typically busy schedules, and may be subject to biases. Formal assessment procedures like OSATS and GEARS attempt to provide objective measures of skill, but remain time-consuming. With advances in machine learning there is an opportunity for fast and objective automated feedback on technical skills. The SimSurgSkill 2021 challenge (hosted as a sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in this endeavor. Using virtual reality (VR) surgical tasks, competitors were tasked with localizing instruments and predicting surgical skill. Here we summarize the winning approaches and how they performed. Using this publicly available dataset and results as a springboard, future work may enable more efficient training of surgeons with advances in surgical data science. The dataset can be accessed from https://console.cloud.google.com/storage/browser/isi-simsurgskill-2021.
Real-world applications and settings typically involve interaction between different modalities (e.g., video, speech, text). To process such multimodal information automatically and use it for an end application, Multimodal Representation Learning (MRL) has emerged as an active area of research in recent times. MRL involves learning reliable and robust representations of information from heterogeneous sources and fusing them. However, in practice, the data acquired from different sources are typically noisy. In some extreme cases, noise of large magnitude can completely alter the semantics of the data, leading to inconsistencies in the parallel multimodal data. In this paper, we propose a novel method for multimodal representation learning in a noisy environment via the generalized product of experts technique. In the proposed method, we train a separate network for each modality to assess the credibility of the information coming from that modality, and subsequently the contribution from each modality is dynamically varied while estimating the joint distribution. We evaluate our method on two challenging benchmarks from two diverse domains: multimodal 3D hand-pose estimation and multimodal surgical video segmentation. We attain state-of-the-art performance on both benchmarks. Our extensive quantitative and qualitative evaluations show the advantages of our method compared to previous approaches.
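A generalized product of experts fuses per-modality estimates by precision weighting, so unreliable (high-variance or downweighted) modalities contribute less to the joint estimate. A minimal sketch assuming scalar Gaussian experts; the paper's learned credibility networks are not reproduced here:

```python
import numpy as np

def gpoe_fuse(means, variances, weights):
    """Generalized Product of Experts fusion of scalar Gaussian estimates.

    Each modality contributes a Gaussian (mean, variance); per-expert
    weights let the model dynamically vary each modality's contribution.
    """
    precisions = np.array(weights) / np.array(variances)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.array(means)).sum()
    return fused_mean, fused_var

# Two modalities: a confident estimate at 1.0 and a noisy one at 5.0.
m, v = gpoe_fuse(means=[1.0, 5.0], variances=[0.1, 10.0], weights=[1.0, 1.0])
print(m)  # fused mean stays close to the reliable modality
```

The fused mean lands near 1.04: the noisy modality's low precision (0.1) is dominated by the reliable modality's precision (10.0).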
In minimally invasive surgery, surgical workflow segmentation from video is a well-studied topic. Conventional approaches define it as a multi-class classification problem in which individual video frames are assigned surgical phase labels. We introduce a novel reinforcement learning formulation for offline phase transition retrieval. Instead of attempting to classify every video frame, we identify the timestamp of each phase transition. By construction, our model does not produce spurious and noisy phase transitions, but contiguous phase blocks. We investigate two different configurations of this model. The first does not require processing all frames in a video (only <60% and <20% of frames in two different applications), while producing results slightly below state-of-the-art accuracy. The second configuration processes all video frames and outperforms the state of the art at comparable computational cost. We compare our method against the recent top-performing frame-based approaches TeCNO and Trans-SVNet on the public dataset Cholec80 and also on an in-house dataset of laparoscopic sacrocolpopexy. We perform both a frame-based (accuracy, precision, recall and F1-score) and an event-based (event ratio) evaluation of our algorithms.
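The benefit of retrieving transitions instead of classifying frames can be seen in a small sketch: expanding transition timestamps into per-frame labels yields contiguous phase blocks by construction, with no isolated spurious labels. The function name is illustrative, not the authors' code:

```python
def transitions_to_phases(transitions, n_frames):
    """Expand phase-transition timestamps into per-frame phase labels.

    Because each phase spans the interval between consecutive transition
    timestamps, the resulting labeling is contiguous by construction.
    """
    labels = []
    ends = transitions[1:] + [n_frames]
    for phase, (start, end) in enumerate(zip(transitions, ends)):
        labels.extend([phase] * (end - start))
    return labels

# Transitions at frames 0, 3 and 7 of a 10-frame video -> three blocks.
print(transitions_to_phases([0, 3, 7], 10))  # [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
```

A frame-wise classifier, by contrast, can emit a different phase for any single frame, which is what produces the noisy transitions the abstract refers to.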
In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing the abnormal anastomoses using laser ablation. This surgery is performed in a minimally invasive manner and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. To tackle this challenge, we propose a learning-based framework for in-vivo fetoscopic frame registration for field-of-view expansion. The novelty of this framework relies on a learning-based keypoint proposal network and an encoding strategy to filter out (i) irrelevant keypoints, based on fetoscopic image segmentation, and (ii) inconsistent homographies. We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries on six different women, against the most recent state-of-the-art algorithm, which relies on the segmentation of placental vessels. The proposed framework achieves higher performance compared with the state of the art, paving the way for robust mosaicking to provide context awareness during TTTS surgery.
Fetal growth assessment from ultrasound is based on a few biometric measurements that are performed manually and assessed relative to the expected gestational age. Reliable biometry estimation depends on the precise detection of landmarks in standard ultrasound planes. Manual annotation can be a time-consuming and operator-dependent task, and may result in high measurement variability. Existing methods for automatic fetal biometry rely on an initial automated segmentation of fetal structures followed by geometric landmark detection. However, segmentation annotations are time-consuming and may be inaccurate, and landmark detection requires the development of measurement-specific geometric methods. This paper describes BiometryNet, an end-to-end landmark regression framework for fetal biometry estimation that overcomes these limitations. It includes a novel Dynamic Orientation Determination (DOD) method for enforcing measurement-specific orientation consistency during network training. DOD reduces variability in network training, increases landmark localization accuracy, and thus yields accurate and robust biometric measurements. To validate our method, we assembled a dataset of 3,398 ultrasound images from 1,829 subjects acquired at three clinical sites with seven different ultrasound devices. Comparison and cross-validation of three different biometric measurements on two independent datasets show that BiometryNet is robust and yields accurate measurements, with errors lower than the clinically permissible errors, outperforming other existing automated biometry estimation methods. Code is available at https://github.com/netanellavisdris/fetalbiometry.
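The generic decode step of such a landmark-regression pipeline, from predicted heatmaps to a biometric length, can be sketched as follows. This illustrates only the standard heatmap-to-measurement conversion, not BiometryNet's DOD training; all names are assumptions:

```python
import numpy as np

def landmarks_to_measurement(heatmaps, pixel_spacing_mm):
    """Convert two predicted landmark heatmaps into a biometric length.

    Each heatmap's argmax gives a landmark location; the measurement is
    the Euclidean distance between the two landmarks, scaled to mm.
    """
    points = []
    for hm in heatmaps:
        idx = np.unravel_index(np.argmax(hm), hm.shape)
        points.append(np.array(idx, dtype=float))
    return np.linalg.norm(points[0] - points[1]) * pixel_spacing_mm

# Two toy heatmaps with peaks 60 pixels apart on a 0.5 mm/px grid.
hm = np.zeros((2, 100, 100))
hm[0, 50, 10] = 1.0  # first landmark
hm[1, 50, 70] = 1.0  # second landmark
length = landmarks_to_measurement(hm, pixel_spacing_mm=0.5)
print(length)  # 60 px * 0.5 mm/px = 30.0 mm
```

In practice, sub-pixel refinement of the argmax (e.g., a local centroid) is often applied before computing the distance.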
Fetoscopy laser photocoagulation is a widely adopted procedure for treating twin-to-twin transfusion syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to regulate blood exchange between the twins. The procedure is particularly challenging due to the limited field of view, poor maneuverability of the fetoscope, poor visibility, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation. Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI 2021 Endoscopic Vision challenge, we released the first large-scale multicentre TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. The challenge provided an opportunity for creating generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multicentre fetoscopic data, we provide a benchmark for future research in this field.
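Segmentation entries in challenges of this kind are commonly compared by per-class intersection over union averaged across classes. A minimal sketch under that assumption; the challenge's exact evaluation protocol may differ:

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean intersection-over-union across classes present in pred or gt.

    Illustrative metric sketch for label maps with classes such as
    vessel, tool, fetus and background.
    """
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0],
                 [1, 2]])
gt = np.array([[0, 1],
               [1, 2]])
print(mean_iou(pred, gt, 3))  # classes score 0.5, 0.5 and 1.0 -> mean 2/3
```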
Background: Fluorescence angiography has shown very promising results in reducing anastomotic leaks by allowing the surgeon to select optimally perfused tissue. However, subjective interpretation of the fluorescent signal still hinders broad application of the technique, as significant variation exists between different surgeons. Our aim was to develop an artificial intelligence algorithm to classify colonic tissue as either "perfused" or "not perfused" based on intraoperative fluorescence angiography data. Methods: A classification model with a ResNet architecture was trained on a dataset of fluorescence angiography videos from a tertiary referral centre. Frames corresponding to fluorescent and non-fluorescent segments of colon were used to train the classification algorithm. Validation was performed using frames from patients not used in the training set, including data collected with the same equipment and data collected with a different camera. Performance metrics were calculated and used to further analyse the output. A decision boundary was identified based on the tissue classification. Results: The convolutional neural network was successfully trained on 1790 frames from 790 patients and validated on 24 frames from 14 patients. The accuracy was 100% on the training set and 80% on the validation set. Recall and precision were 100% and 100% respectively on the training set, and 68.8% and 91.7% on the validation set. Conclusion: Automated classification of intraoperative fluorescence angiography with a high degree of accuracy is possible, and allows automated decision boundary identification. This will enable surgeons to standardize the technique of fluorescence angiography. A web-based application was made available to deploy the algorithm.
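The reported recall and precision follow the standard binary-classification definitions; a minimal sketch on toy "perfused" (1) / "not perfused" (0) labels, unrelated to the study's actual data:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary perfusion labels (1 = perfused).

    precision = TP / (TP + FP): how many predicted-perfused frames were
    truly perfused. recall = TP / (TP + FN): how many truly perfused
    frames were found.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r)  # 2 of 3 predicted positives correct; 2 of 3 positives found
```

The gap between the validation recall (68.8%) and precision (91.7%) in the abstract indicates the model misses some perfused frames but rarely mislabels non-perfused tissue as perfused.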
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
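Mean average precision scores each triplet class by ranking its predictions and averaging the precision at each true positive, then averages over classes. A sketch of per-class AP, assumed here as a generic ranking-based definition rather than the challenge's official evaluation code:

```python
def average_precision(scores, labels):
    """Ranking-based average precision for one triplet class.

    scores: predicted confidence per sample; labels: 1 if the triplet is
    present. AP averages precision@rank over the positive samples.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap, n_pos = 0, 0.0, sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank
    return ap / n_pos if n_pos else 0.0

ap = average_precision([0.9, 0.1, 0.8, 0.4], [1, 0, 1, 0])
print(ap)  # both positives ranked first -> AP = 1.0
```

The mAP reported above is then the mean of this quantity over all triplet classes.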
Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theatre. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only leverage local context. In this paper, we propose a novel framework, STswinCL, that explores complementary intra-video and inter-video relations to boost segmentation performance by progressively capturing global context. We first develop a hierarchical Transformer to capture intra-video relations, which include rich spatial and temporal cues from neighbouring pixels and previous frames. A joint space-time window shifting scheme is proposed to efficiently aggregate these two cues into each pixel embedding. We then explore inter-video relations via pixel-to-pixel contrastive learning, which well structures the global embedding space. A multi-source contrastive training objective is developed to group the pixel embeddings across videos with ground-truth guidance, which is crucial for learning the global properties of the whole data. We extensively validate our method on two public surgical video benchmarks, including the EndoVis18 challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently outperforms previous state-of-the-art approaches. Code is available at https://github.com/yuemingjin/stswincl.
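Pixel-to-pixel contrastive learning typically optimizes an InfoNCE-style objective that pulls same-class pixel embeddings together and pushes others apart. A generic numpy sketch under that assumption; STswinCL's multi-source objective and window-shifting scheme are not reproduced:

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor pixel embedding.

    Similar positives and dissimilar negatives drive the loss toward 0;
    negatives close to the anchor raise it.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = sum(np.exp(cos(anchor, p) / tau) for p in positives)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

anchor = np.array([1.0, 0.0])
# A well-separated negative vs. a hard negative near the anchor.
loss_easy = info_nce(anchor, [np.array([1.0, 0.1])], [np.array([-1.0, 0.0])])
loss_hard = info_nce(anchor, [np.array([1.0, 0.1])], [np.array([1.0, 0.2])])
print(loss_easy < loss_hard)  # True: separated negatives give lower loss
```

In a segmentation setting, positives are embeddings of pixels sharing the anchor's ground-truth class (possibly from other videos), and negatives are pixels of other classes.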